A Brief Discussion on Default Value Behavior in Entity Framework
This issue started when I noticed inconsistent handling of null values for string columns in our project, so we decided to standardize on NOT NULL and store empty strings. Shortly afterward, I saw a colleague write code like this: entity.other = input.item == 3 ? input.other : null;. This confused me: we had agreed that very morning to store empty strings, so why was he storing null?
I asked my colleague, and he replied that he had tested it and the database column was set to NOT NULL Default ''. Even if the entity property held null, the database would write an empty string during insertion. If it were an update operation, he would explicitly set it to an empty string.
I responded that it should depend on whether the value-setting behavior is triggered. He said he was leaving work and would test it for me the next day. However, I wasn't entirely sure if my understanding was correct, because I knew that when Entity Framework updates data, if the new value is the same as the old value, the return value of SaveChanges() is 0. Therefore, it is possible that it doesn't rely on whether the entity property is set, but rather on whether the new value differs from the old one.
Later, I conducted the test myself and got the same result, but I ultimately told him that his code still needed adjustment regardless. Otherwise, others reading the code would assume it stores null, while in reality, it relies on Entity Framework and database characteristics to store an empty string, which is not intuitive.
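In other words, the adjustment is simply to state the intent in code rather than lean on the default constraint. A sketch, using the property names from our conversation:

```csharp
// Store the empty string explicitly instead of relying on the
// NOT NULL DEFAULT '' constraint to translate the missing value for us.
entity.other = input.item == 3 ? input.other : "";
```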
Practical Testing
Using SQL Server for testing, first initialize the table with the following SQL.
CREATE TABLE [dbo].[Test](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](50) NOT NULL, -- A dummy column created to test scenarios without values
[TestVarchar] [varchar](50) NULL,
[TestInt] [int] NULL,
CONSTRAINT [PK_Test] PRIMARY KEY CLUSTERED (
[Id] ASC
) WITH (
PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF
) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Test] ADD CONSTRAINT [DF_Test_TestVarchar] DEFAULT ('TestVarchar') FOR [TestVarchar]
GO
ALTER TABLE [dbo].[Test] ADD CONSTRAINT [DF_Test_TestInt] DEFAULT ((1234)) FOR [TestInt]
GO
SQL Testing
Execute the following SQL commands:
INSERT INTO Test (Name, TestVarchar, TestInt) VALUES ('Name', default, default);
INSERT INTO Test (Name, TestVarchar, TestInt) VALUES ('Name', null, null);
INSERT INTO Test (Name) VALUES ('Name');
The resulting output is as follows:
| Name | TestVarchar | TestInt |
|---|---|---|
| Name | TestVarchar | 1234 |
| Name | NULL | NULL |
| Name | TestVarchar | 1234 |
From the results above, we can see that the SQL default is applied when a column is omitted from the INSERT or when default is specified explicitly. Providing null directly, however, stores NULL; SQL Server does not translate it into the default. So the behavior my colleague observed cannot come from the SQL mechanism alone.
Entity Framework Testing
Using .NET 8, install Microsoft.EntityFrameworkCore 8.0.6 and scaffold the entity model from the database via reverse engineering. The generated code is as follows:
public partial class TestContext : DbContext {
public TestContext(DbContextOptions<TestContext> options)
: base(options) {
}
public virtual DbSet<Test> Tests { get; set; }
protected override void OnModelCreating(ModelBuilder modelBuilder) {
modelBuilder.Entity<Test>(entity => {
entity.ToTable("Test");
entity.Property(e => e.Name)
.IsRequired()
.HasMaxLength(50)
.IsUnicode(false);
entity.Property(e => e.TestInt).HasDefaultValue(1234);
entity.Property(e => e.TestVarchar)
.HasMaxLength(50)
.IsUnicode(false)
.HasDefaultValue("TestVarchar");
});
OnModelCreatingPartial(modelBuilder);
}
partial void OnModelCreatingPartial(ModelBuilder modelBuilder);
}
public partial class Test {
public int Id { get; set; }
public string Name { get; set; }
public string TestVarchar { get; set; }
public int? TestInt { get; set; }
}
Execute the following code:
using (TestContext context = new(dbContextOptions)) {
context.Tests.Add(new() {
Name = "Name"
});
context.SaveChanges();
context.Tests.Add(new() {
Name = "Name",
TestVarchar = null,
TestInt = null
});
context.SaveChanges();
}
The generated SQL is as follows:
INSERT INTO [Test] ([Name])
OUTPUT INSERTED.[Id], INSERTED.[TestInt], INSERTED.[TestVarchar]
VALUES (@p0);
INSERT INTO [Test] ([Name])
OUTPUT INSERTED.[Id], INSERTED.[TestInt], INSERTED.[TestVarchar]
VALUES (@p0);
The execution results are as follows:
| Name | TestVarchar | TestInt |
|---|---|---|
| Name | TestVarchar | 1234 |
| Name | TestVarchar | 1234 |
From the results above, we can see that whether no value is set or null is provided, the final generated INSERT statement ignores these columns. The inferred reason is that Entity Framework does not determine changes based on whether a property is set, but by comparing the new value with the old value. Since the default values for string and int? are both null, the initial value remains unchanged regardless of whether it was not set or set to null, so the INSERT statement ignores these columns.
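One way to observe this for yourself is the change tracker's debug view, which lists per-property state before SaveChanges() (available since EF Core 5); a sketch, assuming the TestContext from above:

```csharp
using (TestContext context = new(dbContextOptions)) {
    context.Tests.Add(new() {
        Name = "Name",
        TestVarchar = null // still equal to the CLR default, so EF treats it as "not set"
    });
    // Properties left at their CLR default show up as value-generated/unset
    // in the long debug view; inspect before calling SaveChanges().
    Console.WriteLine(context.ChangeTracker.DebugView.LongView);
}
```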
When I tested this, I immediately thought of a terrifying scenario: the default value for a C# int is 0, and for bool it is false. If the initial value is set to something else, wouldn't that cause a mismatch between the set value and the result?
struct Type Testing
Use the following SQL to create a second table to test the results.
CREATE TABLE [dbo].[Test2](
[Id] [int] IDENTITY(1,1) NOT NULL,
[TestInt] [int] NOT NULL,
[TestBit] [bit] NOT NULL,
[TestDateTime] [datetime2](7) NOT NULL,
[TestGuid] [uniqueidentifier] NOT NULL,
CONSTRAINT [PK_Test2] PRIMARY KEY CLUSTERED (
[Id] ASC
) WITH (
PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON,
OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF
) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Test2] ADD CONSTRAINT [DF_Test2_TestInt] DEFAULT ((1234)) FOR [TestInt]
GO
ALTER TABLE [dbo].[Test2] ADD CONSTRAINT [DF_Test2_TestBit] DEFAULT ((1)) FOR [TestBit]
GO
ALTER TABLE [dbo].[Test2] ADD CONSTRAINT [DF_Test2_TestDateTime] DEFAULT ('2024-01-01 12:00:00') FOR [TestDateTime]
GO
ALTER TABLE [dbo].[Test2] ADD CONSTRAINT [DF_Test2_TestGuid] DEFAULT ('21EC2020-3AEA-1069-A2DD-08002B30309D') FOR [TestGuid]
GO
The entity-related code is as follows:
// DbContext
modelBuilder.Entity<Test2>(entity => {
entity.ToTable("Test2");
entity.Property(e => e.TestBit).HasDefaultValue(true);
entity.Property(e => e.TestDateTime).HasDefaultValue(new DateTime(2024, 1, 1, 12, 0, 0, 0, DateTimeKind.Unspecified));
entity.Property(e => e.TestGuid).HasDefaultValue(new Guid("21ec2020-3aea-1069-a2dd-08002b30309d"));
entity.Property(e => e.TestInt).HasDefaultValue(1234);
});
// Entity
public partial class Test2 {
public int Id { get; set; }
public int TestInt { get; set; }
public bool TestBit { get; set; }
public DateTime TestDateTime { get; set; }
public Guid TestGuid { get; set; }
}
Execute the following code:
using (TestContext context = new(dbContextOptions)) {
context.Test2s.Add(new Test2());
context.SaveChanges();
context.Test2s.Add(new() {
TestInt = default,
TestBit = default,
TestDateTime = default,
TestGuid = default,
});
context.SaveChanges();
}
The generated SQL is as follows:
INSERT INTO [Test2] ([TestBit])
OUTPUT INSERTED.[Id], INSERTED.[TestDateTime], INSERTED.[TestGuid], INSERTED.[TestInt]
VALUES (@p0);
INSERT INTO [Test2] ([TestBit])
OUTPUT INSERTED.[Id], INSERTED.[TestDateTime], INSERTED.[TestGuid], INSERTED.[TestInt]
VALUES (@p0);
The execution results are as follows:
| TestInt | TestBit | TestDateTime | TestGuid |
|---|---|---|---|
| 1234 | 0 | 2024-01-01 12:00:00.0000000 | 21EC2020-3AEA-1069-A2DD-08002B30309D |
| 1234 | 0 | 2024-01-01 12:00:00.0000000 | 21EC2020-3AEA-1069-A2DD-08002B30309D |
I breathed a sigh of relief after seeing these results; at least TestBit was not ignored. This means that whether a column is ignored during insertion depends on the type. Otherwise, if I encountered a SQL column with a default value of true and wanted to write false, but it ended up being written as true, that would be a disaster I wouldn't want to see.
However, I had another doubt: TestInt is written as 1234 because SQL has a default value of 1234. If there were no default value, would it write 0, or would it fail to write the data?
I removed the default values for the Test2 table columns in SQL Server, performed reverse engineering again, and re-executed the write code. The generated SQL is as follows:
INSERT INTO [Test2] ([TestBit], [TestDateTime], [TestGuid], [TestInt])
OUTPUT INSERTED.[Id]
VALUES (@p0, @p1, @p2, @p3);
INSERT INTO [Test2] ([TestBit], [TestDateTime], [TestGuid], [TestInt])
OUTPUT INSERTED.[Id]
VALUES (@p0, @p1, @p2, @p3);
The execution results are as follows:
| TestInt | TestBit | TestDateTime | TestGuid |
|---|---|---|---|
| 0 | 0 | 0001-01-01 00:00:00.0000000 | 00000000-0000-0000-0000-000000000000 |
| 0 | 0 | 0001-01-01 00:00:00.0000000 | 00000000-0000-0000-0000-000000000000 |
From the results above, we can see that when no SQL default value is set, the generated INSERT statement does not ignore the column, even if the entity property is not set or is set to the same value as the C# default.
WARNING
I later found that if you use EF Core Power Tools for reverse engineering and target .NET 6 or .NET 7, the result for bit columns differs: the entity property type will be bool?, and it will be marked as Required.
// DbContext
modelBuilder.Entity<Test2>(entity => {
entity.ToTable("Test2");
entity.Property(e => e.TestBit)
.IsRequired()
.HasDefaultValueSql("((1))");
entity.Property(e => e.TestDateTime).HasDefaultValueSql("('2024-01-01 12:00:00')");
entity.Property(e => e.TestGuid).HasDefaultValueSql("('21EC2020-3AEA-1069-A2DD-08002B30309D')");
entity.Property(e => e.TestInt).HasDefaultValueSql("((1234))");
});
public partial class Test2 {
public int Id { get; set; }
public int TestInt { get; set; }
public bool? TestBit { get; set; }
public DateTime TestDateTime { get; set; }
public Guid TestGuid { get; set; }
}
Looking at the generated SQL, you will find that TestBit is also ignored...
INSERT INTO [Test2]
OUTPUT INSERTED.[Id], INSERTED.[TestBit], INSERTED.[TestDateTime], INSERTED.[TestGuid], INSERTED.[TestInt]
DEFAULT VALUES;
INSERT INTO [Test2]
OUTPUT INSERTED.[Id], INSERTED.[TestBit], INSERTED.[TestDateTime], INSERTED.[TestGuid], INSERTED.[TestInt]
DEFAULT VALUES;
Conclusion
- When a SQL Server column has a default value set, Entity Framework may ignore certain column types when generating the INSERT statement during data insertion if the entity property value matches the default value of that type.
- When using Entity Framework, try to avoid SQL default values. If you must use them, make sure the SQL default matches the C# default to avoid unexpected results. For string types, however, you can set them to NOT NULL DEFAULT '' to handle null and empty strings uniformly.
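If you are on EF Core 8 or later, there is also a configuration escape hatch worth knowing: HasSentinel changes which CLR value EF interprets as "not set" for a property with a database default. A sketch of the idea (this is additional configuration, not part of the tests above; verify the behavior against your EF Core version):

```csharp
// With a sentinel of -1, EF sends TestInt = 0 to the database instead of
// omitting the column; only TestInt == -1 falls back to the SQL default.
entity.Property(e => e.TestInt)
    .HasDefaultValue(1234)
    .HasSentinel(-1);
```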
Bonus
Updating an Entity with Unchanged Values
As mentioned in the introduction, when updating data, even if the entity property is set but its value remains unchanged, the return value of SaveChanges() will be 0. The following is a test example:
First, use the following SQL to create a new table:
CREATE TABLE [dbo].[Test3](
[Id] [int] IDENTITY(1,1) NOT NULL,
[TestVarchar] [varchar](50) NOT NULL,
[TestInt] [int] NOT NULL,
[TestBit] [bit] NOT NULL,
CONSTRAINT [PK_Test3] PRIMARY KEY CLUSTERED (
[Id] ASC
) WITH (
PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF
) ON [PRIMARY]
) ON [PRIMARY]
GO
Create three identical records with the following values:
| TestVarchar | TestInt | TestBit |
|---|---|---|
| TestVarchar | 1234 | 1 |
| TestVarchar | 1234 | 1 |
| TestVarchar | 1234 | 1 |
using (TestContext context = new(dbContextOptions)) {
Test3 entity = context.Test3s.Single(x => x.Id == 1);
entity.TestVarchar = entity.TestVarchar;
entity.TestInt = entity.TestInt;
entity.TestBit = entity.TestBit;
int changedCount = context.SaveChanges();
Console.WriteLine("EntityState:" + context.Entry(entity).State);
Console.WriteLine("ChangedCount:" + changedCount);
}
The console results are as follows:
info: Microsoft.EntityFrameworkCore.Database.Command[20101]
Executed DbCommand (18ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
SELECT TOP(2) [t].[Id], [t].[TestBit], [t].[TestInt], [t].[TestVarchar]
FROM [Test3] AS [t]
WHERE [t].[Id] = 1
EntityState:Unchanged
ChangedCount:0
From the results above, we can see that even though the entity properties were assigned, the EntityState remains Unchanged and no UPDATE statement is executed, so SaveChanges() returns 0. Therefore, in the Update method of a business service, use context.Entry(entity).State == EntityState.Unchanged || context.SaveChanges() > 0 to determine the result, avoiding a false negative when the values did not change.
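A minimal sketch of that check in an update method (assuming the single-entity scenario above):

```csharp
// Treat "nothing actually changed" as success rather than as a failed update.
// If the state is already Unchanged there is nothing to persist, so the
// short-circuit skipping SaveChanges() is harmless for this entity.
bool succeeded = context.Entry(entity).State == EntityState.Unchanged
    || context.SaveChanges() > 0;
```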
Correct Use of AsNoTracking() to Avoid Unnecessary EntityState Modifications
At my current company, a common mistake colleagues make is not understanding when AsNoTracking() is appropriate, which leads to calling AsNoTracking() on the SELECT that retrieves data before an UPDATE. The entity is then untracked, so context.Entry(entity).State = EntityState.Modified; must be set manually for the update to execute at all. Although the final result is the same, the generated SQL differs.
I tested this using the following code. The Name column for these three records is Name, and I changed them to NewName using the following code:
using (TestContext context = new(dbContextOptions)) {
Test entity1 = context.Tests.Single(x => x.Id == 1);
entity1.Name = "NewName";
context.SaveChanges();
Test entity2 = context.Tests.Single(x => x.Id == 2);
entity2.Name = "NewName";
context.Entry(entity2).State = EntityState.Modified;
context.SaveChanges();
Test entity3 = context.Tests.AsNoTracking().Single(x => x.Id == 3);
entity3.Name = "NewName";
context.Entry(entity3).State = EntityState.Modified;
context.SaveChanges();
}
The SQL syntax generated by this code is as follows:
-- Without using AsNoTracking(), and without manually setting EntityState.Modified
SELECT TOP(2) [t].[Id], [t].[Name], [t].[TestInt], [t].[TestVarchar]
FROM [Test] AS [t]
WHERE [t].[Id] = 1
UPDATE [Test] SET [Name] = @p0
OUTPUT 1
WHERE [Id] = @p1;
-- Without using AsNoTracking(), but manually setting EntityState.Modified
SELECT TOP(2) [t].[Id], [t].[Name], [t].[TestInt], [t].[TestVarchar]
FROM [Test] AS [t]
WHERE [t].[Id] = 2
UPDATE [Test] SET [Name] = @p0, [TestInt] = @p1, [TestVarchar] = @p2
OUTPUT 1
WHERE [Id] = @p3;
-- Using AsNoTracking(), and manually setting EntityState.Modified
SELECT TOP(2) [t].[Id], [t].[Name], [t].[TestInt], [t].[TestVarchar]
FROM [Test] AS [t]
WHERE [t].[Id] = 3
UPDATE [Test] SET [Name] = @p0, [TestInt] = @p1, [TestVarchar] = @p2
OUTPUT 1
WHERE [Id] = @p3;
From these results, we can see that a normal UPDATE statement only updates the columns that were actually changed, whereas manually setting context.Entry(entity).State = EntityState.Modified; causes every column to be updated.
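For genuinely disconnected entities (for example, one deserialized from a request), an alternative to marking the whole entity Modified is to attach it and let the tracker detect the individual change; a sketch, assuming a detached instance like entity3 above before it was marked Modified:

```csharp
// Attach starts tracking in the Unchanged state with the current values
// as the snapshot; modifying a property afterward produces an UPDATE
// that sets only that column.
context.Tests.Attach(entity3);
entity3.Name = "NewName";
// Or flag it explicitly:
// context.Entry(entity3).Property(e => e.Name).IsModified = true;
context.SaveChanges();
```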
Change Log
- 2025-07-12 Initial version created.
